With the rapid development of network technology and the rapid growth of network devices, data throughput has increased dramatically. To resolve the backhaul bottleneck in cellular networks and satisfy users' latency requirements, architectures such as edge caching aim to proactively keep a limited amount of popular content at the network edge based on prediction results. Meanwhile, the interactions between contents (e.g., deep neural network models, Wikipedia-like knowledge bases) and users can be modeled as a dynamic bipartite graph. In this paper, to maximize the cache hit rate, we leverage an effective dynamic graph neural network (DGNN) to jointly learn the structural and temporal patterns embedded in the bipartite graph. Furthermore, to gain deeper insight into the dynamics of the evolving graph, we propose an age-of-information (AoI) based attention mechanism to extract valuable historical information while avoiding the problem of message staleness. Combining the above prediction model, we also develop a cache selection algorithm to make caching decisions based on the prediction results. Extensive results demonstrate that our model obtains higher prediction accuracy than other state-of-the-art schemes on two real-world datasets. The hit-rate results further verify the superiority of the caching policy based on our proposed model over other traditional approaches.
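To make the AoI idea concrete, here is a minimal sketch of an attention layer whose scores are penalized by message age; the softplus-scaled linear-in-age penalty, the tensor shapes, and all names are our own assumptions rather than the paper's design:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AoIAttention(nn.Module):
    """Attention over historical neighbor embeddings, down-weighted by
    age of information (AoI). A sketch: the staleness penalty is assumed
    to be a learnable linear-in-age term; the paper may differ."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.decay = nn.Parameter(torch.tensor(1.0))  # assumed learnable decay rate

    def forward(self, target, history, aoi):
        # target: (dim,) current embedding; history: (T, dim); aoi: (T,) message ages
        scores = self.k(history) @ self.q(target) / history.size(-1) ** 0.5
        scores = scores - F.softplus(self.decay) * aoi   # stale messages score lower
        weights = torch.softmax(scores, dim=0)
        return weights @ history                          # aggregated historical context

# usage: AoIAttention(64)(torch.randn(64), torch.randn(10, 64), torch.arange(10.))
```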
In many real-world applications, an object (e.g., an image) can be represented or described by multiple instances (e.g., image patches) and is simultaneously associated with multiple labels. Such applications can be formulated as multi-instance multi-label learning (MIML) problems and have been extensively studied in the past years. Existing MIML methods are useful in many applications, but most of them suffer from relatively low accuracy and training efficiency due to several issues: i) the inter-label correlations (i.e., the probabilistic correlations among the multiple labels corresponding to an object) are ignored; ii) instance correlations cannot be learned directly (or jointly) with other types of correlations due to missing instance labels; iii) the diverse inter-correlations (e.g., inter-label correlations, instance correlations) can only be learned in multiple stages. To resolve these issues, a new single-stage framework called Broad Multi-Instance Multi-Label Learning (BMIML) is proposed. BMIML contains three innovative modules: i) an auto-weighted label enhancement learning (AWLEL) based on the broad learning system (BLS); ii) a specific MIML neural network called scalable multi-instance probabilistic regression (SMIPR); and iii) finally, an interactive decision optimization (IDO). As a result, BMIML can simultaneously learn the different inter-correlations among whole images, instances, and labels in a single stage, yielding higher classification accuracy and much faster training. Experiments show that BMIML is highly competitive in accuracy with (or even better than) existing methods, while being much faster than most MIML methods, even on large medical image datasets (> 90K images).
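As an illustration of the multi-instance probabilistic flavor of such a head, the toy module below scores each instance per label and combines the scores with a noisy-OR, a common MIML assumption; this is our sketch, not the paper's SMIPR:

```python
import torch
import torch.nn as nn

class ToyMIMLHead(nn.Module):
    """A toy multi-instance probabilistic head (our illustration, not SMIPR).
    Each instance gets per-label probabilities; bag-level probabilities are
    combined with a noisy-OR, a common MIML assumption."""
    def __init__(self, dim, num_labels):
        super().__init__()
        self.head = nn.Linear(dim, num_labels)

    def forward(self, instances):
        # instances: (n, dim) features of one bag (e.g., patches of one image)
        p_inst = torch.sigmoid(self.head(instances))   # (n, L) instance-label probs
        return 1 - torch.prod(1 - p_inst, dim=0)       # (L,) noisy-OR bag probs
```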
Grasp pose estimation is an important issue for robots to interact with the real world. However, most existing methods require an exact 3D object model to be available beforehand or a large amount of grasp annotations for training. To avoid these problems, we propose TransGrasp, a category-level grasp pose estimation method that predicts grasp poses for a whole category of objects by labeling only one object instance. Specifically, we perform grasp pose transfer across a category of objects based on their shape correspondences, and propose a grasp pose refinement module to further fine-tune the transferred grasp poses to ensure successful grasps. Experiments demonstrate the effectiveness of our method in achieving high-quality grasps with the transferred grasp poses. Our code is available at https://github.com/yanjh97/transgrasp.
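A schematic sketch of correspondence-based grasp transfer may help; the nearest-point lookup, array layout, and function names below are our simplifications, not the TransGrasp implementation:

```python
import numpy as np

def transfer_grasp_contacts(src_pts, tgt_pts, corr_idx, contacts):
    """Map grasp contact points from a labeled source shape onto a target shape.
    Our simplification of correspondence-based transfer: corr_idx[i] is the
    index of the target point corresponding to source point i."""
    # Snap each annotated contact to its nearest source surface point ...
    d = np.linalg.norm(src_pts[None, :, :] - contacts[:, None, :], axis=-1)
    nearest = d.argmin(axis=1)          # (n_contacts,) indices into src_pts
    # ... then move it to the corresponding point on the target instance.
    return tgt_pts[corr_idx[nearest]]   # (n_contacts, 3) transferred contacts
```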
Pawlak rough sets and neighborhood rough sets are the two most common rough set theoretical models. Pawlak rough sets can use equivalence classes to represent knowledge but cannot process continuous data, while neighborhood rough sets can process continuous data but lose the ability to represent knowledge with equivalence classes. To this end, this paper presents granular-ball rough sets based on granular-ball computing. Granular-ball rough sets can represent both Pawlak rough sets and neighborhood rough sets, achieving a unified representation of the two. Consequently, granular-ball rough sets can not only process continuous data but also use equivalence classes for knowledge representation. In addition, we propose an implementation algorithm for granular-ball rough sets. Experiments on benchmark datasets demonstrate that, owing to the combined robustness and adaptability of granular-ball computing, the learning accuracy of granular-ball rough sets is greatly improved compared with both Pawlak rough sets and traditional neighborhood rough sets. Granular-ball rough sets also outperform nine popular or state-of-the-art feature selection methods.
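The unified representation can be illustrated with a small sketch: treating each granular ball like an equivalence class, a concept's lower and upper approximations fall out of ball purity. The data layout below is our assumption:

```python
import numpy as np

def ball_approximations(balls, target_label):
    """Lower/upper approximations of a concept from granular balls.
    Assumed layout: each ball is (center, radius, member_labels). A pure ball
    enters the lower approximation; any ball touching the concept enters the
    upper one, mirroring equivalence classes in Pawlak rough sets."""
    lower, upper = [], []
    for center, radius, labels in balls:
        labels = np.asarray(labels)
        if np.any(labels == target_label):
            upper.append((center, radius))
            if np.all(labels == target_label):
                lower.append((center, radius))
    return lower, upper
```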
Benefiting from its capability of exploiting intrinsic supervision information, contrastive learning has recently achieved promising performance in the field of deep graph clustering. However, we observe that two drawbacks of the positive and negative sample construction mechanisms keep the performance of existing algorithms from improving further. 1) The quality of positive samples heavily depends on carefully designed data augmentations, and inappropriate data augmentations easily lead to semantic drift and indiscriminative positive samples. 2) The constructed negative samples are not reliable because they ignore important clustering information. To solve these problems, we propose a Cluster-guided Contrastive deep Graph Clustering network (CCGC) that mines the intrinsic supervision information in high-confidence clustering results. Specifically, instead of conducting complex node or edge perturbation, we construct two views of the graph by designing special Siamese encoders whose weights are not shared between the sibling sub-networks. Then, guided by the high-confidence clustering information, we carefully select and construct the positive samples from the same high-confidence cluster in the two views. Moreover, to construct semantically meaningful negative sample pairs, we regard the centers of different high-confidence clusters as negative samples, thus improving the discriminative capability and reliability of the constructed sample pairs. Lastly, we design an objective function that pulls together samples from the same cluster and pushes away those from other clusters by maximizing and minimizing, respectively, the cross-view cosine similarity between positive and negative samples. Extensive experimental results on six datasets demonstrate the effectiveness of CCGC compared with existing state-of-the-art algorithms.
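A schematic reading of the cluster-guided loss follows; treating all cluster centers as negatives without exclusion masks is our simplification, not the authors' code:

```python
import torch
import torch.nn.functional as F

def cluster_guided_loss(z1, z2, centers, pos_idx):
    """Schematic cluster-guided contrastive loss (not the authors' code).
    z1, z2: (n, d) two-view embeddings of high-confidence nodes; pos_idx pairs
    each node with one from the same high-confidence cluster in the other view;
    centers: (k, d) high-confidence cluster centers used as negatives (for
    simplicity we do not exclude a node's own center)."""
    z1, z2, centers = (F.normalize(t, dim=-1) for t in (z1, z2, centers))
    pos = (z1 * z2[pos_idx]).sum(-1).mean()   # maximize cross-view positive similarity
    neg = (z1 @ centers.T).mean()             # minimize similarity to cluster centers
    return neg - pos
```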
To generate high-quality rendered images for real-time applications, it is common to trace only a few samples per pixel (spp) at a lower resolution and then supersample to the target high resolution. Based on the observation that pixels rendered at a low resolution are typically highly aliased, we present a novel method for neural supersampling based on ray tracing 1/4-spp samples at the high resolution. Our key insight is that the ray-traced samples at the target resolution are accurate and reliable, which turns supersampling into an interpolation problem. We present a mask-reinforced neural network to reconstruct and interpolate high-quality image sequences. First, a novel temporal accumulation network is introduced to compute the correlation between current and previous features, significantly improving their temporal stability. Then a reconstruction network based on a multi-scale U-Net with skip connections is adopted to reconstruct and generate the desired high-resolution image. Experimental results and comparisons show that our proposed method generates higher-quality supersampling results than current state-of-the-art methods without increasing the total number of ray-tracing samples.
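As a rough illustration of the temporal accumulation idea, the block below gates the motion-compensated history by a per-pixel correlation with the current features; the gating form and shapes are our assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class TemporalAccumulation(nn.Module):
    """Blend current and motion-compensated previous features using a learned,
    correlation-informed gate. A sketch of the described idea only."""
    def __init__(self, ch):
        super().__init__()
        self.gate = nn.Conv2d(2 * ch, 1, kernel_size=3, padding=1)

    def forward(self, cur, prev_warped):
        # cur, prev_warped: (B, C, H, W); previous frame assumed already warped
        corr = (cur * prev_warped).sum(1, keepdim=True)   # per-pixel correlation
        alpha = torch.sigmoid(self.gate(torch.cat([cur, prev_warped], 1)) + corr)
        return alpha * prev_warped + (1 - alpha) * cur    # accumulated features
```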
Temporal sentence grounding (TSG) aims to identify the temporal boundary of a specific segment in an untrimmed video given a sentence query. All existing works first utilize a sparse sampling strategy to extract a fixed number of video frames and then conduct multi-modal interactions with the query sentence for reasoning. However, we argue that these methods have overlooked two indispensable issues: 1) Boundary bias: the annotated target segment generally refers to two specific frames as the corresponding start and end timestamps. The video downsampling process may lose these two frames and take adjacent irrelevant frames as the new boundaries. 2) Reasoning bias: such incorrect new boundary frames also bias the reasoning during frame-query interaction, reducing the generalization ability of the model. To alleviate the above limitations, in this paper we propose a novel Siamese Sampling and Reasoning Network (SSRN) for TSG, which introduces a siamese sampling mechanism to generate additional contextual frames to enrich and refine the new boundaries. Specifically, a reasoning strategy is developed to learn the inter-relationship among these frames and generate soft labels on the boundaries for more accurate frame-query reasoning. This mechanism is also able to supplement the absent consecutive visual semantics to the sampled sparse frames for fine-grained activity understanding. Extensive experiments demonstrate the effectiveness of SSRN on three challenging datasets.
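One plausible form of such soft boundary labels is a normalized Gaussian around the annotated timestamps, sketched below; the paper instead learns the soft labels with its reasoning strategy, so this is only our illustration:

```python
import numpy as np

def soft_boundary_labels(frame_times, start, end, sigma=0.5):
    """Normalized Gaussian start/end label distributions over sampled frames.
    Our assumed form (the paper learns soft labels instead): frames near an
    annotated boundary get high probability, so losing the exact boundary
    frame no longer forces a hard label onto an irrelevant neighbor."""
    t = np.asarray(frame_times, dtype=float)
    s = np.exp(-(t - start) ** 2 / (2 * sigma ** 2))
    e = np.exp(-(t - end) ** 2 / (2 * sigma ** 2))
    return s / s.sum(), e / e.sum()   # start / end probability distributions
```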
Representing and synthesizing novel views in real-world dynamic scenes from casual monocular videos is a long-standing problem. Existing solutions typically approach dynamic scenes by applying geometry techniques or utilizing temporal information between several adjacent frames without considering the underlying background distribution in the entire scene or the transmittance over the ray dimension, limiting their performance on static and occlusion areas. Our approach $\textbf{D}$istribution-$\textbf{D}$riven neural radiance fields offers high-quality view synthesis and a 3D solution to $\textbf{D}$etach the background from the entire $\textbf{D}$ynamic scene, which is called $\text{D}^4$NeRF. Specifically, it employs a neural representation to capture the scene distribution in the static background and a 6D-input NeRF to represent dynamic objects, respectively. Each ray sample is given an additional occlusion weight to indicate the transmittance lying in the static and dynamic components. We evaluate $\text{D}^4$NeRF on public dynamic scenes and our urban driving scenes acquired from an autonomous-driving dataset. Extensive experiments demonstrate that our approach outperforms previous methods in rendering texture details and motion areas while also producing a clean static background. Our code will be released at https://github.com/Luciferbobo/D4NeRF.
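The occlusion-weight idea can be sketched as a blended volume-rendering step, where each sample's density and color are mixed between the static and dynamic branches; the exact blending below is our reading of the description, not the released code:

```python
import torch

def composite_static_dynamic(rgb_s, sigma_s, rgb_d, sigma_d, w_occ, deltas):
    """One ray's volume rendering with static/dynamic blending (our reading of
    the occlusion-weight idea). rgb_*: (N, 3); sigma_*, w_occ, deltas: (N,),
    with w_occ in [0, 1] shifting each sample between the two branches."""
    sigma = w_occ * sigma_d + (1 - w_occ) * sigma_s                 # blended density
    rgb = w_occ[:, None] * rgb_d + (1 - w_occ)[:, None] * rgb_s     # blended color
    alpha = 1 - torch.exp(-sigma * deltas)                          # per-sample opacity
    trans = torch.cumprod(torch.cat([torch.ones_like(alpha[:1]),
                                     1 - alpha[:-1]]), dim=0)       # transmittance
    return ((trans * alpha)[:, None] * rgb).sum(0)                  # composited color
```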
Deploying reliable deep learning techniques in interdisciplinary applications needs learned models to output accurate and (even more importantly) explainable predictions. Existing approaches typically explicate network outputs in a post-hoc fashion, under an implicit assumption that faithful explanations come from accurate predictions/classifications. We make the opposite claim: explanations boost (or even determine) classification. That is, end-to-end learning of explanation factors to augment discriminative representation extraction could be a more intuitive strategy to inversely assure fine-grained explainability, e.g., in neuroimaging and neuroscience studies with high-dimensional data containing noisy, redundant, and task-irrelevant information. In this paper, we propose such an explainable geometric deep network, dubbed NeuroExplainer, and apply it to uncover altered infant cortical development patterns associated with preterm birth. Given fundamental cortical attributes as network input, our NeuroExplainer adopts a hierarchical attention-decoding framework to learn fine-grained attentions and the respective discriminative representations to accurately recognize preterm infants from term-born infants at term-equivalent age. NeuroExplainer learns the hierarchical attention-decoding modules under subject-level weak supervision coupled with targeted regularizers deduced from domain knowledge of brain development. These prior-guided constraints implicitly maximize the explainability metrics (i.e., fidelity, sparsity, and stability) during network training, driving the learned network to output detailed explanations and accurate classifications. Experimental results on the public dHCP benchmark suggest that NeuroExplainer leads to quantitatively reliable explanation results that are qualitatively consistent with representative neuroimaging studies.
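For intuition, the snippet below sketches two of the named explainability penalties on attention maps, an L1 sparsity term and an L2 stability term under input perturbation; both exact forms are our assumptions, not the paper's definitions:

```python
import torch

def explainability_regularizers(attn, attn_perturbed):
    """Sparsity and stability penalties on attention maps (assumed forms).
    attn: attention map from the clean input; attn_perturbed: attention map
    from a mildly perturbed input of the same subject."""
    sparsity = attn.abs().mean()                        # prefer focused, sparse attention
    stability = (attn - attn_perturbed).pow(2).mean()   # prefer perturbation-robust maps
    return sparsity, stability
```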
Domain adaptation methods reduce domain shift, typically by learning domain-invariant features. Most existing methods are built on distribution matching, e.g., adversarial domain adaptation, which tends to corrupt feature discriminability. In this paper, we propose Discriminative Radial Domain Adaptation (DRDR), which bridges the source and target domains via a shared radial structure. It is motivated by the observation that, as the model is trained to be progressively discriminative, features of different categories expand outwards in different directions, forming a radial structure. We show that transferring such an inherently discriminative structure enhances feature transferability and discriminability simultaneously. Specifically, we represent each domain with a global anchor and each category with a local anchor to form a radial structure, and reduce domain shift via structure matching. Structure matching consists of two parts: an isometric transformation to align the structure globally and local refinement to match each category. To enhance the discriminability of the structure, we further encourage samples to cluster close to their corresponding local anchors based on an optimal-transport assignment. In extensive experiments on multiple benchmarks, our method consistently outperforms state-of-the-art approaches on varied tasks, including typical unsupervised domain adaptation, multi-source domain adaptation, domain-agnostic learning, and domain generalization.
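A minimal sketch of radial structure matching follows: each domain gets a global anchor (its feature mean) and per-class local anchors, and the radial directions are aligned across domains; the names and the cosine objective are our simplifications of DRDR:

```python
import torch
import torch.nn.functional as F

def radial_structure_loss(feat_src, lab_src, feat_tgt, lab_tgt, num_classes):
    """Align the radial structures of two domains (our simplification of DRDR).
    We match the directions from each domain's global anchor to its per-class
    local anchors. Target labels would be pseudo-labels in practice; every
    class is assumed to appear in both batches."""
    def radial_dirs(feat, lab):
        g = feat.mean(0)                                      # global anchor
        locals_ = torch.stack([feat[lab == c].mean(0)
                               for c in range(num_classes)])  # local anchors
        return F.normalize(locals_ - g, dim=-1)               # radial directions

    cos = F.cosine_similarity(radial_dirs(feat_src, lab_src),
                              radial_dirs(feat_tgt, lab_tgt), dim=-1)
    return (1 - cos).mean()
```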